Cluster extraction and annotation strategies on tabular datasets with diverse feature types¶

Importing necessary libraries¶

In [1]:
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import tensorflow as tf
from tensorflow import keras
import umap.umap_ as umap
%config InlineBackend.figure_format = 'svg'

Importing pre-processed data¶

In [2]:
np.random.seed(42)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 100)
data=pd.read_csv('Preprocessed_DM_xx.csv')
In [3]:
np.random.seed(42)
data=data.sample(frac=1) #Shuffle the data set

Feature engineering¶

  • Creating new feature called hypertension
  • Filtering unnecessary details
In [4]:
np.random.seed(42)
HTN_indexes=data.loc[(data['Currently.taking.a.prescribed.medicine.to.lower.BP'] != 0) | (data['First.SYSTOLIC.reading'] >= 140) | (data['First.DIASTOLIC.reading'] >= 90) ].index.values
HTN_cols=np.zeros(data.shape[0])
HTN_cols[[HTN_indexes]]=1
data['HTN']=HTN_cols
data=data.drop(["First.SYSTOLIC.reading","First.DIASTOLIC.reading","Currently.taking.a.prescribed.medicine.to.lower.BP"], axis=1)
data=data.reset_index(drop=True)
data.columns
data=data.drop(["Hb_adjust_alt_smok","Second.SYSTOLIC.reading","Second.DIASTOLIC.reading","Third.SYSTOLIC.reading","Third.DIASTOLIC.reading","Hb_status","Glucose.level",'SBP_status'], axis=1)
data=data.loc[data['BMI'] != 99.99]
data=data.loc[data['Hemoglobin.level..g.dl...1.decimal.'] != 99.99]
data=data.loc[data['Currently.has.asthma'] != .5]
data=data.loc[data['Currently.has.thyroid.disorder'] != .5]
data=data.loc[data['Currently.has.heart.disease'] != .5]
data=data.loc[data['Currently.has.cancer'] != .5]
data=data.loc[data['DM_history'] == 1]
data=data.loc[data['Type.of.caste.or.tribe.of.the.household.head'] != 0]
data=data.loc[data['Time.to.get.to.water.source..minutes.'] != -1]
data=data.drop(["Unnamed: 0","DM_status","DM_history"], axis=1)
In [5]:
np.random.seed(42)
i=[x for x in range(10125)]

data.set_index(pd.Series(i), inplace=True) # Reset the index

Spliting features¶

Creating 2 new dataframes: "data_disease" with features related to disease and "data_others" with rest of the features

In [6]:
data_disease= data[['Currently.has.asthma',
       'Currently.has.thyroid.disorder', 'Currently.has.heart.disease',
       'Currently.has.cancer', 'Suffers.from.TB','HTN']]
In [7]:
data_others= data[['Drinks.alcohol', 'Smoking_stat','Has.refrigerator',
       'Has.bicycle', 'Has.motorcycle.scooter', 'Has.car.truck', 'Owns.livestock..herds.or.farm.animals','Frequency.takes.milk.or.curd',
       'Frequency.eats.pulses.or.beans',
       'Frequency.eats.dark.green.leafy.vegetable', 'Frequency.eats.fruits',
       'Frequency.eats.eggs', 'Frequency.eats.fish',
       'Frequency.eats.chicken.or.meat', 'Frequency.eats.fried.food',
       'Frequency.takes.aerated.drinks','Frequency.household.members.smoke.inside.the.house','Wealth.index',
       'Highest.educational.level', 'Current.age','BMI','Hemoglobin.level..g.dl...1.decimal.','Time.to.get.to.water.source..minutes.', 'Household.head.s.religion', 'Sex', 'Type.of.place.of.residence', 'Household.structure',
       'Type.of.caste.or.tribe.of.the.household.head','Type.of.cooking.fuel','Source.of.drinking.water']]

Function for dimension reduction using UMAP¶

In [8]:
def feature_clustering(UMAP_neb,min_dist_UMAP, metric, data, visual):
    import umap.umap_ as umap
    np.random.seed(42)
    data_embedded = umap.UMAP(n_neighbors=UMAP_neb, min_dist=min_dist_UMAP, n_components=2, metric=metric, random_state=42).fit_transform(data)
    data_embedded[:,0]=(data_embedded[:,0]- np.mean(data_embedded[:,0]))/np.std(data_embedded[:,0])
    data_embedded[:,1]=(data_embedded[:,1]- np.mean(data_embedded[:,1]))/np.std(data_embedded[:,1])
    result = pd.DataFrame(data = data_embedded , 
        columns = ['UMAP_0', 'UMAP_1'])
    if visual==1:
        sns.lmplot( x="UMAP_0", y="UMAP_1",data=result,fit_reg=False,legend=False,scatter_kws={"s": 3},palette=customPalette_set1) # specify the point size
        #plt.savefig('clusters_umap_all.png', dpi=700, bbox_inches='tight')
        plt.show()
    else:
        pass
    return result

Dividing features¶

  • ord_list=ordinal features
  • cont_list=continueous features
  • nom_list=nominal features
In [9]:
ord_list=['Drinks.alcohol', 'Smoking_stat','Has.refrigerator',
       'Has.bicycle', 'Has.motorcycle.scooter', 'Has.car.truck', 'Owns.livestock..herds.or.farm.animals','Frequency.takes.milk.or.curd',
       'Frequency.eats.pulses.or.beans',
       'Frequency.eats.dark.green.leafy.vegetable', 'Frequency.eats.fruits',
       'Frequency.eats.eggs', 'Frequency.eats.fish',
       'Frequency.eats.chicken.or.meat', 'Frequency.eats.fried.food',
       'Frequency.takes.aerated.drinks','Frequency.household.members.smoke.inside.the.house','Wealth.index',
       'Highest.educational.level' ]
cont_list=['Current.age','BMI','Hemoglobin.level..g.dl...1.decimal.','Time.to.get.to.water.source..minutes.']
nom_list=['Household.head.s.religion', 'Sex', 'Type.of.place.of.residence', 'Household.structure',
       'Type.of.caste.or.tribe.of.the.household.head','Type.of.cooking.fuel','Source.of.drinking.water']

Function for Feature-type Distributed Clustering¶

Function parameters:¶

  • data=dataframe on which feature distributed clustering should be performed
  • cont_list=list of continueous features
  • nom_list=list of nominal features
  • ord_list=list of ordinal features
  • cont_metric=distance metric for continueous data
  • ord_metric=distance metric for ordinal data
  • nom_metric=distance metric for nominal data
  • drop_nominal=1(to drop nominal data) or 0(don't drop nominal data)
  • visual=1(to plot the data) or 0(don't plot the data)
In [10]:
def FDC(data,cont_list,nom_list,ord_list,cont_metric, ord_metric, nom_metric, drop_nominal, visual):
    np.random.seed(42)
    colors_set1 = ["lightcoral", "lightseagreen", "mediumorchid", "orange", "burlywood", "cornflowerblue", "plum", "yellowgreen"]
    customPalette_set1 = sns.set_palette(sns.color_palette(colors_set1))
    cont_df=data[cont_list]
    nom_df=data[nom_list]
    ord_df=data[ord_list]
    cont_emb=feature_clustering(30,0.1, cont_metric, cont_df, 0) #Reducing continueous features into 2dim
    ord_emb=feature_clustering(30,0.1, ord_metric, ord_df, 0) #Reducing ordinal features into 2dim
    nom_emb=feature_clustering(30,0.1, nom_metric, nom_df, 0) #Reducing nominal features into 2dim
    if drop_nominal==1:
        result_concat=pd.concat([ord_emb, cont_emb, nom_emb.drop(['UMAP_1'],axis=1)],axis=1) #concatinating all reduced dimensions to get 5D embedding(1D from nominal)
    else:
        result_concat=pd.concat([ord_emb, cont_emb, nom_emb],axis=1)
    data_embedded = umap.UMAP(n_neighbors=30, min_dist=0.001, n_components=2, metric='euclidean', random_state=42).fit_transform(result_concat) #reducing 5D embedding to 2D using UMAP
    result_reduced = pd.DataFrame(data = data_embedded , 
        columns = ['UMAP_0', 'UMAP_1'])
    
    if visual==1:
        sns.lmplot( x="UMAP_0", y="UMAP_1",data=result_reduced,fit_reg=False,legend=False,scatter_kws={"s": 3},palette=customPalette_set1) # specify the point size
        plt.show()
        #plt.savefig('clusters_umap_all.png', dpi=700, bbox_inches='tight')
    else:
        pass
    return result_concat, result_reduced #returns both 5D and 2D embedding
In [11]:
# applying Feature Distributed Clustering(FDC) on entire 10125 data with all features except disease features
entire_data_FDC_emb_five,entire_data_FDC_emb_two=FDC(data_others,cont_list,nom_list,ord_list,'euclidean','canberra','hamming',1,1)
2022-08-17T16:40:38.212692 image/svg+xml Matplotlib v3.5.1, https://matplotlib.org/

DBSCAN clustering on FDC embedding¶

In [12]:
def db_scan(eps,min_samples,two_d_embedding,visual, pal):
    from sklearn.cluster import DBSCAN
    dbscan = DBSCAN(eps=eps, min_samples = min_samples)
    clusters=dbscan.fit_predict(two_d_embedding)
    (values,counts) = np.unique(clusters,return_counts=True)
    two_d_embedding['Cluster'] = clusters
    
    if visual==1:
        sns.lmplot( x="UMAP_0", y="UMAP_1",
        data=two_d_embedding,
        fit_reg=False, 
        legend=True,
        hue='Cluster', # color by cluster
        scatter_kws={"s": 3},palette=pal) # specify the point size
        plt.savefig('dbscan_ref_2dim.png', dpi=700, bbox_inches='tight')
        plt.show()
    else:
        pass
    return two_d_embedding.Cluster.to_list(),counts
In [13]:
#setting color palette for visualization of clusters
colors_set1 = ['lightgray','lightcoral','cornflowerblue','orange','mediumorchid', 'lightseagreen','olive', 'chocolate','steelblue']
customPalette_set1 = sns.set_palette(sns.color_palette(colors_set1))


#Applying clustering algorithm on FDC embeddings from entire data
entire_data_cluster_list,entire_data_cluster_counts=db_scan(0.95,220,entire_data_FDC_emb_two,1,customPalette_set1)
2022-08-17T16:40:40.207424 image/svg+xml Matplotlib v3.5.1, https://matplotlib.org/
In [14]:
#Getting noise indices
non_noise_indices= np.where(np.array(entire_data_cluster_list)!=-1)

#Removing noise/outlires from FDC embedding and from entire data
entire_data_FDC_emb_five= entire_data_FDC_emb_five.iloc[non_noise_indices]
entire_data_FDC_emb_two= entire_data_FDC_emb_two.iloc[non_noise_indices]
entire_data_cluster_list= np.array(entire_data_cluster_list)[non_noise_indices]
data_others= data_others.iloc[non_noise_indices]

#Creating new cloumn for storing cluster labels 
data_others['cluster_labels']= entire_data_cluster_list

#getting binary representation for cluster labels
data_others= pd.get_dummies(data=data_others, columns=['cluster_labels'])
In [15]:
#Getting column names of encoded cluster labels
cluster_column_names=data_others.columns[-len(np.unique(entire_data_cluster_list)):].to_list()

Dividing data set for experiments¶

In [16]:
#75%  of entire data for training
np.random.seed(42)
data=data_others.sample(frac=0.75) # Training data
In [17]:
#Another 25% of entire data for validation
np.random.seed(42)
data_val=data_others.drop(data.index) # Validation data

Dividing training data into 3 folds¶

In [18]:
#Dividing training data into three folds

np.random.seed(42)
df_1=data.sample(frac=0.33) #fold 1

df=data.drop(df_1.index)
df_2=df.sample(frac=0.51) #fold 2

df_3=df.drop(df_2.index) #fold 3
In [19]:
np.random.seed(42)
#Possible combinations of concating 2 folds for training and using remaining fold for testing
training_folds=[pd.concat([df_1,df_2],axis=0), pd.concat([df_2,df_3],axis=0), pd.concat([df_3,df_1],axis=0)]
testing_folds=[df_3,df_1,df_2]

Function for neural network¶

Function parameters:¶

  • n_features= dimension of input data
  • hidden_dim1= dimension of first hidden layer
  • hidden_dim2= dimension of second hidden layer
  • out_emb_size= dimension of output data
  • act1= first hidden layer activation function
  • act2= second hidden layer activation function
In [20]:
def neural_network(n_features,hidden_dim1,hidden_dim2,out_emb_size,act1,act2,loss):
    np.random.seed(42)
    tf.random.set_seed(42)
    model=keras.Sequential([
         keras.layers.Dense(hidden_dim1,input_dim=n_features,activation=act1),
         keras.layers.Dense(hidden_dim1,activation=act2),
         keras.layers.Dense(out_emb_size)])
    model.compile(optimizer="adam" ,
              loss=loss, 
              metrics=['mse'])
    return model    

Function for Cluster Incidence Matrix (CIM)¶

  • creating a matrix to evaluate the performance based on predicted cluster labels
In [21]:
def cluster_incidence_matrix_mod(cluster_list_new):
    np.random.seed(42)
    
    matrix=np.zeros((len(cluster_list_new),len(cluster_list_new)))
    for i in range(len(cluster_list_new)):
        for j in range(len(cluster_list_new)):
                if cluster_list_new[i]==cluster_list_new[j]:
                    matrix[i,j]=1 
                else:
                    pass
    
    return matrix 
In [22]:
#Function for decoding the encoded cluster labels
def label_decoder(label_dataframe):
    label_array=np.array(label_dataframe)
    decoded_labels=[]
    for i in label_array:
        max_val=np.argmax(i)
        decoded_labels.append(max_val)
    return decoded_labels
In [23]:
colnames=[]
for i in range(len(entire_data_FDC_emb_five.columns)):
    colnames.append('c'+str(i+1))
In [24]:
np.random.seed(42)
count=0
fold_readings=[]
while count<3:
    FDC_emb_five_train=entire_data_FDC_emb_five.loc[list(training_folds[count].index)] #5D FDC embedding of training folds from entire training data
    FDC_emb_two_train=entire_data_FDC_emb_two.loc[list(training_folds[count].index)] #2D embedding of training folds from entire training data
    FDC_emb_five_train.columns=colnames
    
    #Thirty dimensional data of training fold as features_matrix(X_train) 
    features_matrix=np.array(training_folds[count].drop(cluster_column_names, axis=1,inplace=False)) #X_train
    
    #Five dimensional FDC embedding of training fold as target_matrix(y_train)
    target_matrix=np.array(FDC_emb_five_train) #y_train
    
    #Train a neural network to get five dimensional embedding
    model_1=neural_network(len(features_matrix[0]),int(0.6*len(features_matrix[0])),int(0.36*len(features_matrix[0])),len(target_matrix[0]),"relu","sigmoid","mse")
    history=model_1.fit(features_matrix,target_matrix,epochs=30,batch_size=8)
    print('\n')
    print('Training history across epochs for fold ',count+1)
    plt.plot(history.history['mse'],'r')
    plt.ylabel('mse')
    plt.xlabel('epoch')
    plt.show()
    
    #Using same thirty dimensional features_matrix(X_train) from first neural network and encoded cluster labels of training fold as target_labels_matrix(y_train) 
    target_labels_matrix=np.array(training_folds[count].loc[:,cluster_column_names]) #y
    
    
    #Train a neural network to get encoded cluster labels
    model_2=neural_network(len(features_matrix[0]),int(0.6*len(features_matrix[0])),int(0.36*len(features_matrix[0])),len(target_labels_matrix[0]),"relu","softmax","mse")
    history=model_2.fit(features_matrix,target_labels_matrix,epochs=30,batch_size=8)
    print('\n')
    print('Training history across epochs for fold ',count+1)
    plt.plot(history.history['mse'],'r')
    plt.ylabel('mse')
    plt.xlabel('epoch')
    plt.show()
    
    #Decoding cluster labels of training fold
    decoded_target_labels_matrix=label_decoder(target_labels_matrix)

    #Actual encoded cluster labels of testing fold for metric calculation  
    ref_clusters=testing_folds[count].loc[:,cluster_column_names] 
    #Decoding encoded cluster labels of testing fold
    decoded_ref_clusters=label_decoder(ref_clusters)
    

    #predicting testing fold to get five dim embedding using trained model_1
    testing_data=testing_folds[count].drop(cluster_column_names, axis=1,inplace=False)
    predicted_5dim=pd.DataFrame(model_1.predict(testing_data), columns=colnames)
    
    #UMAP on predicted 5D embedding
    predicted_2dim=feature_clustering(30,0.01, "euclidean", predicted_5dim, 0)

    #predicting testing fold to get encoded cluster labels using trained model_2
    predicted_clusters=pd.DataFrame(model_2.predict(testing_data))
    
    #Decoding predicted encoded cluster labels
    decoded_predicted_clusters=label_decoder(predicted_clusters)
    
    
    #concatinating training and predicted 5D embedding
    concatenated_5dim=pd.concat([FDC_emb_five_train,predicted_5dim])
    
    #UMAP on concatinated embedding
    two_dim_viz=feature_clustering(30, 0.01, 'euclidean', concatenated_5dim, 0)
    
    #Concatinating decoded cluster labels of training fold and predicted testing fold
    concatenated_cluster_labels=np.concatenate([np.array(decoded_target_labels_matrix),np.array(decoded_predicted_clusters)+len(np.unique(decoded_target_labels_matrix))])
    
    two_dim_viz['Cluster']= concatenated_cluster_labels
    
    
    #Setting dark colors for training folds    
    darkerhues=['lightcoral','cornflowerblue','orange','mediumorchid', 'lightseagreen','olive', 'chocolate','steelblue']
    colors_set2=[]
    for i in range(len(np.unique(decoded_target_labels_matrix))):
        colors_set2.append(darkerhues[i])
    
    #Concatinating dark colors for training folds and corresponding light colors for testing folds
    colors_set2=colors_set2+["lightpink", 'skyblue', 'wheat', "plum","paleturquoise",  "lightgreen",  'burlywood','lightsteelblue']
    
    print('Vizualization for FDC for training fold (shown in dark hue) '+str(count+1) + 'and predicted clusters from neural network on testing fold (shown in corresponding light hues) '+str(count+1))
    
    #visualizing the clusters of both training and testing folds
    sns.lmplot( x="UMAP_0", y="UMAP_1", data=two_dim_viz, fit_reg=False, legend=False, hue='Cluster', scatter_kws={"s": 3},palette=sns.set_palette(sns.color_palette(colors_set2))) 
    plt.show()
    
    #Metric calculation

    CIM_predicted=cluster_incidence_matrix_mod(np.array(decoded_predicted_clusters))#Cluster incidence metric for predicted clusters
    CIM_reference=cluster_incidence_matrix_mod(np.array(decoded_ref_clusters))#Cluster incidence metric for reference clusters
    Product=np.dot(CIM_predicted,CIM_reference)
    cluster_incdences_in_data=np.sum(CIM_reference,axis=1)  
    mean_points_in_same_clusters=np.mean(np.diagonal(Product)/cluster_incdences_in_data)
    fold_readings.append(mean_points_in_same_clusters*100)
    
    print("Average percentage of patients belongs to the same cluster is: {}%".format(mean_points_in_same_clusters*100))
    print('\n')
    count+=1


print('\n')
print('\n')
Epoch 1/30
614/614 [==============================] - 1s 1ms/step - loss: 0.6988 - mse: 0.6988
Epoch 2/30
614/614 [==============================] - 1s 1ms/step - loss: 0.4060 - mse: 0.4060
Epoch 3/30
614/614 [==============================] - 1s 1ms/step - loss: 0.3304 - mse: 0.3304
Epoch 4/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2957 - mse: 0.2957
Epoch 5/30
614/614 [==============================] - 1s 993us/step - loss: 0.2778 - mse: 0.2778
Epoch 6/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2664 - mse: 0.2664
Epoch 7/30
614/614 [==============================] - 1s 994us/step - loss: 0.2585 - mse: 0.2585
Epoch 8/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2533 - mse: 0.2533
Epoch 9/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2479 - mse: 0.2479
Epoch 10/30
614/614 [==============================] - 1s 999us/step - loss: 0.2445 - mse: 0.2445
Epoch 11/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2416 - mse: 0.2416
Epoch 12/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2383 - mse: 0.2383
Epoch 13/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2361 - mse: 0.2361
Epoch 14/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2330 - mse: 0.2330
Epoch 15/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2306 - mse: 0.2306
Epoch 16/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2285 - mse: 0.2285
Epoch 17/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2262 - mse: 0.2262
Epoch 18/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2240 - mse: 0.2240
Epoch 19/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2203 - mse: 0.2203
Epoch 20/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2189 - mse: 0.2189
Epoch 21/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2167 - mse: 0.2167
Epoch 22/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2141 - mse: 0.2141
Epoch 23/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2118 - mse: 0.2118
Epoch 24/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2092 - mse: 0.2092
Epoch 25/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2069 - mse: 0.2069
Epoch 26/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2062 - mse: 0.2062
Epoch 27/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2043 - mse: 0.2043
Epoch 28/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2021 - mse: 0.2021
Epoch 29/30
614/614 [==============================] - 1s 1ms/step - loss: 0.2008 - mse: 0.2008
Epoch 30/30
614/614 [==============================] - 1s 1ms/step - loss: 0.1985 - mse: 0.1985


Training history across epochs for fold  1
2022-08-17T16:41:02.102769 image/svg+xml Matplotlib v3.5.1, https://matplotlib.org/
Epoch 1/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0823 - mse: 0.0823
Epoch 2/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0447 - mse: 0.0447
Epoch 3/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0298 - mse: 0.0298
Epoch 4/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0237 - mse: 0.0237
Epoch 5/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0198 - mse: 0.0198
Epoch 6/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0172 - mse: 0.0172
Epoch 7/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0157 - mse: 0.0157
Epoch 8/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0155 - mse: 0.0155
Epoch 9/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0150 - mse: 0.0150
Epoch 10/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0146 - mse: 0.0146
Epoch 11/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0143 - mse: 0.0143
Epoch 12/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0137 - mse: 0.0137
Epoch 13/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0136 - mse: 0.0136
Epoch 14/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0132 - mse: 0.0132
Epoch 15/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0126 - mse: 0.0126
Epoch 16/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0127 - mse: 0.0127
Epoch 17/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0125 - mse: 0.0125
Epoch 18/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0123 - mse: 0.0123
Epoch 19/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0119 - mse: 0.0119
Epoch 20/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0119 - mse: 0.0119
Epoch 21/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0117 - mse: 0.0117
Epoch 22/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0117 - mse: 0.0117
Epoch 23/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0115 - mse: 0.0115
Epoch 24/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0113 - mse: 0.0113
Epoch 25/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0111 - mse: 0.0111
Epoch 26/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0110 - mse: 0.0110
Epoch 27/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0111 - mse: 0.0111
Epoch 28/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0107 - mse: 0.0107
Epoch 29/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0104 - mse: 0.0104
Epoch 30/30
614/614 [==============================] - 1s 1ms/step - loss: 0.0106 - mse: 0.0106


Training history across epochs for fold  1
2022-08-17T16:41:23.695297 image/svg+xml Matplotlib v3.5.1, https://matplotlib.org/
75/75 [==============================] - 0s 836us/step
75/75 [==============================] - 0s 1ms/step
Vizualization for FDC for training fold (shown in dark hue) 1and predicted clusters from neural network on testing fold (shown in corresponding light hues) 1
2022-08-17T16:41:48.966738 image/svg+xml Matplotlib v3.5.1, https://matplotlib.org/
Average percentage of patients belongs to the same cluster is: 91.62979324323682%


Epoch 1/30
612/612 [==============================] - 1s 1ms/step - loss: 0.7087 - mse: 0.7087
Epoch 2/30
612/612 [==============================] - 1s 1ms/step - loss: 0.4153 - mse: 0.4153
Epoch 3/30
612/612 [==============================] - 1s 1ms/step - loss: 0.3362 - mse: 0.3362
Epoch 4/30
612/612 [==============================] - 1s 1ms/step - loss: 0.3000 - mse: 0.3000
Epoch 5/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2799 - mse: 0.2799
Epoch 6/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2666 - mse: 0.2666
Epoch 7/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2582 - mse: 0.2582
Epoch 8/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2522 - mse: 0.2522
Epoch 9/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2477 - mse: 0.2477
Epoch 10/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2440 - mse: 0.2440
Epoch 11/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2410 - mse: 0.2410
Epoch 12/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2375 - mse: 0.2375
Epoch 13/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2341 - mse: 0.2341
Epoch 14/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2309 - mse: 0.2309
Epoch 15/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2282 - mse: 0.2282
Epoch 16/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2250 - mse: 0.2250
Epoch 17/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2231 - mse: 0.2231
Epoch 18/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2188 - mse: 0.2188
Epoch 19/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2156 - mse: 0.2156
Epoch 20/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2122 - mse: 0.2122
Epoch 21/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2084 - mse: 0.2084
Epoch 22/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2059 - mse: 0.2059
Epoch 23/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2032 - mse: 0.2032
Epoch 24/30
612/612 [==============================] - 1s 1ms/step - loss: 0.2007 - mse: 0.2007
Epoch 25/30
612/612 [==============================] - 1s 1ms/step - loss: 0.1983 - mse: 0.1983
Epoch 26/30
612/612 [==============================] - 1s 1ms/step - loss: 0.1978 - mse: 0.1978
Epoch 27/30
612/612 [==============================] - 1s 1ms/step - loss: 0.1948 - mse: 0.1948
Epoch 28/30
612/612 [==============================] - 1s 1ms/step - loss: 0.1936 - mse: 0.1936
Epoch 29/30
612/612 [==============================] - 1s 1ms/step - loss: 0.1918 - mse: 0.1918
Epoch 30/30
612/612 [==============================] - 1s 1ms/step - loss: 0.1907 - mse: 0.1907


Training history across epochs for fold  2
2022-08-17T16:42:14.460403 image/svg+xml Matplotlib v3.5.1, https://matplotlib.org/
Epoch 1/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0833 - mse: 0.0833
Epoch 2/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0466 - mse: 0.0466
Epoch 3/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0320 - mse: 0.0320
Epoch 4/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0252 - mse: 0.0252
Epoch 5/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0199 - mse: 0.0199
Epoch 6/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0170 - mse: 0.0170
Epoch 7/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0157 - mse: 0.0157
Epoch 8/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0153 - mse: 0.0153
Epoch 9/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0147 - mse: 0.0147
Epoch 10/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0147 - mse: 0.0147
Epoch 11/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0145 - mse: 0.0145
Epoch 12/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0145 - mse: 0.0145
Epoch 13/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0144 - mse: 0.0144
Epoch 14/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0142 - mse: 0.0142
Epoch 15/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0141 - mse: 0.0141
Epoch 16/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0140 - mse: 0.0140
Epoch 17/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0141 - mse: 0.0141
Epoch 18/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0139 - mse: 0.0139
Epoch 19/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0140 - mse: 0.0140
Epoch 20/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0138 - mse: 0.0138
Epoch 21/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0139 - mse: 0.0139
Epoch 22/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0137 - mse: 0.0137
Epoch 23/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0135 - mse: 0.0135
Epoch 24/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0136 - mse: 0.0136
Epoch 25/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0134 - mse: 0.0134
Epoch 26/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0135 - mse: 0.0135
Epoch 27/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0133 - mse: 0.0133
Epoch 28/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0131 - mse: 0.0131
Epoch 29/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0129 - mse: 0.0129
Epoch 30/30
612/612 [==============================] - 1s 1ms/step - loss: 0.0123 - mse: 0.0123


Training history across epochs for fold  2
2022-08-17T16:42:37.228709 image/svg+xml Matplotlib v3.5.1, https://matplotlib.org/
76/76 [==============================] - 0s 745us/step
76/76 [==============================] - 0s 1ms/step
Vizualization for FDC for training fold (shown in dark hue) 2and predicted clusters from neural network on testing fold (shown in corresponding light hues) 2
2022-08-17T16:43:00.837247 image/svg+xml Matplotlib v3.5.1, https://matplotlib.org/
Average percentage of patients belongs to the same cluster is: 93.05013173088844%


Epoch 1/30
602/602 [==============================] - 1s 1ms/step - loss: 0.7123 - mse: 0.7123
Epoch 2/30
602/602 [==============================] - 1s 1ms/step - loss: 0.4330 - mse: 0.4330
Epoch 3/30
602/602 [==============================] - 1s 1ms/step - loss: 0.3475 - mse: 0.3475
Epoch 4/30
602/602 [==============================] - 1s 1ms/step - loss: 0.3105 - mse: 0.3105
Epoch 5/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2894 - mse: 0.2894
Epoch 6/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2743 - mse: 0.2743
Epoch 7/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2655 - mse: 0.2655
Epoch 8/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2586 - mse: 0.2586
Epoch 9/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2539 - mse: 0.2539
Epoch 10/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2502 - mse: 0.2502
Epoch 11/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2449 - mse: 0.2449
Epoch 12/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2393 - mse: 0.2393
Epoch 13/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2353 - mse: 0.2353
Epoch 14/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2306 - mse: 0.2306
Epoch 15/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2277 - mse: 0.2277
Epoch 16/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2251 - mse: 0.2251
Epoch 17/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2219 - mse: 0.2219
Epoch 18/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2206 - mse: 0.2206
Epoch 19/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2178 - mse: 0.2178
Epoch 20/30
602/602 [==============================] - 1s 2ms/step - loss: 0.2158 - mse: 0.2158
Epoch 21/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2143 - mse: 0.2143
Epoch 22/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2120 - mse: 0.2120
Epoch 23/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2108 - mse: 0.2108
Epoch 24/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2088 - mse: 0.2088
Epoch 25/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2068 - mse: 0.2068
Epoch 26/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2047 - mse: 0.2047
Epoch 27/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2026 - mse: 0.2026
Epoch 28/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2002 - mse: 0.2002
Epoch 29/30
602/602 [==============================] - 1s 1ms/step - loss: 0.2006 - mse: 0.2006
Epoch 30/30
602/602 [==============================] - 1s 1ms/step - loss: 0.1989 - mse: 0.1989


Training history across epochs for fold  3
2022-08-17T16:43:25.563863 image/svg+xml Matplotlib v3.5.1, https://matplotlib.org/
Epoch 1/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0848 - mse: 0.0848
Epoch 2/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0488 - mse: 0.0488
Epoch 3/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0317 - mse: 0.0317
Epoch 4/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0239 - mse: 0.0239
Epoch 5/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0195 - mse: 0.0195
Epoch 6/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0173 - mse: 0.0173
Epoch 7/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0164 - mse: 0.0164
Epoch 8/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0161 - mse: 0.0161
Epoch 9/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0153 - mse: 0.0153
Epoch 10/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0155 - mse: 0.0155
Epoch 11/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0151 - mse: 0.0151
Epoch 12/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0152 - mse: 0.0152
Epoch 13/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0148 - mse: 0.0148
Epoch 14/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0149 - mse: 0.0149
Epoch 15/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0151 - mse: 0.0151
Epoch 16/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0147 - mse: 0.0147
Epoch 17/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0146 - mse: 0.0146
Epoch 18/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0144 - mse: 0.0144
Epoch 19/30
602/602 [==============================] - 1s 2ms/step - loss: 0.0146 - mse: 0.0146
Epoch 20/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0145 - mse: 0.0145
Epoch 21/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0144 - mse: 0.0144
Epoch 22/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0146 - mse: 0.0146
Epoch 23/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0145 - mse: 0.0145
Epoch 24/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0142 - mse: 0.0142
Epoch 25/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0147 - mse: 0.0147
Epoch 26/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0141 - mse: 0.0141
Epoch 27/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0141 - mse: 0.0141
Epoch 28/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0142 - mse: 0.0142
Epoch 29/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0145 - mse: 0.0145
Epoch 30/30
602/602 [==============================] - 1s 1ms/step - loss: 0.0140 - mse: 0.0140


Training history across epochs for fold  3
2022-08-17T16:43:47.232927 image/svg+xml Matplotlib v3.5.1, https://matplotlib.org/
78/78 [==============================] - 0s 829us/step
78/78 [==============================] - 0s 1ms/step
Vizualization for FDC for training fold (shown in dark hue) 3and predicted clusters from neural network on testing fold (shown in corresponding light hues) 3
2022-08-17T16:44:11.921810 image/svg+xml Matplotlib v3.5.1, https://matplotlib.org/
Average percentage of patients belongs to the same cluster is: 94.66526678872079%






In [25]:
print('Average percentage of patients belonging to the same cluster over all three folds:', np.mean(np.array(fold_readings)))
Average percentage of patients belonging to the same cluster over all three folds: 93.11506392094867

Validation¶

In [26]:
np.random.seed(42)

FDC_emb_five_data=entire_data_FDC_emb_five.loc[list(data.index)] #5D FDC embedding of training fold from entire data
FDC_emb_two_data=entire_data_FDC_emb_two.loc[list(data.index)] #2D embedding of training fold from entire data
FDC_emb_five_data.columns=colnames

#Thirty dimensional data of training fold as features_matrix(X_train) 
features_matrix=np.array(data.drop(cluster_column_names, axis=1,inplace=False)) #X_train

#Five dimensional FDC embedding of training fold as target_matrix(y_train)
target_matrix=np.array(FDC_emb_five_data) #y_train

#Train a neural network to get five dimensional embedding
model_1=neural_network(len(features_matrix[0]),int(0.6*len(features_matrix[0])),int(0.36*len(features_matrix[0])),len(target_matrix[0]),"relu","sigmoid","mse")
history=model_1.fit(features_matrix,target_matrix,epochs=30,batch_size=8)
print('\n')
print('Training history across epochs for training data ')
plt.plot(history.history['mse'],'r')
plt.ylabel('mse')
plt.xlabel('epoch')
plt.show()

#Using same thirty dimensional features_matrix(X_train) from first neural network and encoded cluster labels of training fold as target_labels_matrix(y_train) 
target_labels_matrix=np.array(data.loc[:,cluster_column_names]) #y


#Train a neural network to get encoded cluster labels
model_2=neural_network(len(features_matrix[0]),int(0.6*len(features_matrix[0])),int(0.36*len(features_matrix[0])),len(target_labels_matrix[0]),"relu","softmax","mse")
history=model_2.fit(features_matrix,target_labels_matrix,epochs=30,batch_size=8)
print('\n')
print('Training history across epochs for training data ')
plt.plot(history.history['mse'],'r')
plt.ylabel('mse')
plt.xlabel('epoch')
plt.show()

#Decoding cluster labels of training fold
decoded_target_labels_matrix=label_decoder(target_labels_matrix)

#Actual encoded cluster labels of validation data for metric calculation  
ref_clusters=data_val.loc[:,cluster_column_names] 
#Decoding encoded cluster labels of validation data
decoded_ref_clusters=label_decoder(ref_clusters)


#predicting validation data to get five dim embedding using trained model_1
validation_data=data_val.drop(cluster_column_names, axis=1,inplace=False)
predicted_5dim=pd.DataFrame(model_1.predict(validation_data), columns=colnames)

#UMAP on predicted 5D embedding
predicted_2dim=feature_clustering(30,0.01, "euclidean", predicted_5dim, 0)

#predicting validation data to get encoded cluster labels using trained model_2
predicted_clusters=pd.DataFrame(model_2.predict(validation_data))

#Decoding predicted encoded cluster labels
decoded_predicted_clusters=label_decoder(predicted_clusters)


#concatinating training and predicted 5D embedding
concatenated_5dim=pd.concat([FDC_emb_five_data,predicted_5dim])

#UMAP on concatinated embedding
two_dim_viz=feature_clustering(30, 0.01, 'euclidean', concatenated_5dim, 0)

#Concatinating decoded cluster labels of training data and predicted validation data
concatenated_cluster_labels=np.concatenate([np.array(decoded_target_labels_matrix),np.array(decoded_predicted_clusters)+len(np.unique(decoded_target_labels_matrix))])

two_dim_viz['Cluster']= concatenated_cluster_labels



#Setting dark colors for training data    
darkerhues=['lightcoral','cornflowerblue','orange','mediumorchid', 'lightseagreen','olive', 'chocolate','steelblue']
colors_set2=[]
for i in range(len(np.unique(decoded_target_labels_matrix))):
    colors_set2.append(darkerhues[i])

#Concatinating dark colors for training data and corresponding light colors for validation data
colors_set2=colors_set2+["lightpink", 'skyblue', 'wheat', "plum","paleturquoise",  "lightgreen",  'burlywood','lightsteelblue']

print('Vizualization for FDC for training data (shown in dark hue) '+ 'and predicted clusters from neural network on validation data (shown in corresponding light hues) ')

#visualizing the clusters of both training and validation data
sns.lmplot( x="UMAP_0", y="UMAP_1", data=two_dim_viz, fit_reg=False, legend=False, hue='Cluster', scatter_kws={"s": 3},palette=sns.set_palette(sns.color_palette(colors_set2))) 
plt.show()

#Metric calculation

CIM_predicted=cluster_incidence_matrix_mod(np.array(decoded_predicted_clusters))#Cluster incidence metric for predicted clusters
CIM_reference=cluster_incidence_matrix_mod(np.array(decoded_ref_clusters))#Cluster incidence metric for reference clusters
Product=np.dot(CIM_predicted,CIM_reference)
cluster_incidences_in_data=np.sum(CIM_reference,axis=1)  
mean_points_in_same_clusters=np.mean(np.diagonal(Product)/cluster_incidences_in_data)
fold_readings.append(mean_points_in_same_clusters*100)

print("Average percentage of patients belongs to the same cluster is: {}%".format(mean_points_in_same_clusters*100))
print('\n')



print('\n')
print('\n')
Epoch 1/30
914/914 [==============================] - 1s 1ms/step - loss: 0.6300 - mse: 0.6300
Epoch 2/30
914/914 [==============================] - 1s 1ms/step - loss: 0.3550 - mse: 0.3550
Epoch 3/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2979 - mse: 0.2979
Epoch 4/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2717 - mse: 0.2717
Epoch 5/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2580 - mse: 0.2580
Epoch 6/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2491 - mse: 0.2491
Epoch 7/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2420 - mse: 0.2420
Epoch 8/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2360 - mse: 0.2360
Epoch 9/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2301 - mse: 0.2301
Epoch 10/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2244 - mse: 0.2244
Epoch 11/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2199 - mse: 0.2199
Epoch 12/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2155 - mse: 0.2155
Epoch 13/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2129 - mse: 0.2129
Epoch 14/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2117 - mse: 0.2117
Epoch 15/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2072 - mse: 0.2072
Epoch 16/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2056 - mse: 0.2056
Epoch 17/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2027 - mse: 0.2027
Epoch 18/30
914/914 [==============================] - 1s 1ms/step - loss: 0.2002 - mse: 0.2002
Epoch 19/30
914/914 [==============================] - 1s 1ms/step - loss: 0.1968 - mse: 0.1968
Epoch 20/30
914/914 [==============================] - 1s 1ms/step - loss: 0.1941 - mse: 0.1941
Epoch 21/30
914/914 [==============================] - 1s 1ms/step - loss: 0.1920 - mse: 0.1920
Epoch 22/30
914/914 [==============================] - 1s 1ms/step - loss: 0.1888 - mse: 0.1888
Epoch 23/30
914/914 [==============================] - 1s 1ms/step - loss: 0.1861 - mse: 0.1861
Epoch 24/30
914/914 [==============================] - 1s 1ms/step - loss: 0.1831 - mse: 0.1831
Epoch 25/30
914/914 [==============================] - 1s 1ms/step - loss: 0.1803 - mse: 0.1803
Epoch 26/30
914/914 [==============================] - 1s 1ms/step - loss: 0.1783 - mse: 0.1783
Epoch 27/30
914/914 [==============================] - 1s 1ms/step - loss: 0.1754 - mse: 0.1754
Epoch 28/30
914/914 [==============================] - 1s 1ms/step - loss: 0.1720 - mse: 0.1720
Epoch 29/30
914/914 [==============================] - 1s 1ms/step - loss: 0.1678 - mse: 0.1678
Epoch 30/30
914/914 [==============================] - 1s 1ms/step - loss: 0.1666 - mse: 0.1666


Training history across epochs for training data 
2022-08-17T16:44:45.966300 image/svg+xml Matplotlib v3.5.1, https://matplotlib.org/
Epoch 1/30
914/914 [==============================] - 2s 1ms/step - loss: 0.0741 - mse: 0.0741
Epoch 2/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0336 - mse: 0.0336
Epoch 3/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0238 - mse: 0.0238
Epoch 4/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0191 - mse: 0.0191
Epoch 5/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0163 - mse: 0.0163
Epoch 6/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0154 - mse: 0.0154
Epoch 7/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0150 - mse: 0.0150
Epoch 8/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0148 - mse: 0.0148
Epoch 9/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0149 - mse: 0.0149
Epoch 10/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0145 - mse: 0.0145
Epoch 11/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0144 - mse: 0.0144
Epoch 12/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0143 - mse: 0.0143
Epoch 13/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0142 - mse: 0.0142
Epoch 14/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0142 - mse: 0.0142
Epoch 15/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0141 - mse: 0.0141
Epoch 16/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0139 - mse: 0.0139
Epoch 17/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0137 - mse: 0.0137
Epoch 18/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0136 - mse: 0.0136
Epoch 19/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0133 - mse: 0.0133
Epoch 20/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0130 - mse: 0.0130
Epoch 21/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0125 - mse: 0.0125
Epoch 22/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0122 - mse: 0.0122
Epoch 23/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0121 - mse: 0.0121
Epoch 24/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0116 - mse: 0.0116
Epoch 25/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0116 - mse: 0.0116
Epoch 26/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0113 - mse: 0.0113
Epoch 27/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0113 - mse: 0.0113
Epoch 28/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0108 - mse: 0.0108
Epoch 29/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0107 - mse: 0.0107
Epoch 30/30
914/914 [==============================] - 1s 1ms/step - loss: 0.0105 - mse: 0.0105


Training history across epochs for training data 
2022-08-17T16:45:18.014783 image/svg+xml Matplotlib v3.5.1, https://matplotlib.org/
77/77 [==============================] - 0s 801us/step
77/77 [==============================] - 0s 938us/step
Vizualization for FDC for training data (shown in dark hue) and predicted clusters from neural network on validation data (shown in corresponding light hues) 
2022-08-17T16:45:46.544242 image/svg+xml Matplotlib v3.5.1, https://matplotlib.org/
Average percentage of patients belongs to the same cluster is: 89.85713770110935%






In [ ]: